Dimension Reduction Techniques for Training Polynomial Networks

Authors

  • William M. Campbell
  • Kari Torkkola
  • Sreeram V. Balakrishnan
Abstract

We propose two novel methods for reducing dimension in training polynomial networks. We consider the class of polynomial networks whose output is the weighted sum of a basis of monomials. Our first method for dimension reduction eliminates redundancy in the training process. Using an implicit matrix structure, we derive iterative methods that converge quickly. A second method for dimension reduction involves a novel application of random dimension reduction to "feature space." The combination of these algorithms yields a method for training polynomial networks on large data sets with less computation than traditional methods, while also reducing and controlling model complexity.
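A minimal sketch of the second idea, random dimension reduction applied to a polynomial feature space. This is illustrative only: the degree-2 monomial expansion and the Gaussian projection matrix are assumptions for the sketch, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def monomial_features(X):
    """Expand X into a degree-2 monomial basis: 1, x_i, x_i*x_j (i <= j).

    This explicit expansion is the 'feature space' whose dimension grows
    quickly with the input dimension d (here 1 + d + d(d+1)/2 columns).
    """
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def random_project(Phi, k, rng):
    """Project the feature matrix onto k random Gaussian directions,
    scaled so that squared norms are approximately preserved."""
    D = Phi.shape[1]
    R = rng.normal(size=(D, k)) / np.sqrt(k)
    return Phi @ R

# Usage: 10 input dimensions expand to 66 monomial features,
# which are then compressed to 20 random features for training.
X = rng.normal(size=(100, 10))
Phi = monomial_features(X)        # shape (100, 66)
Z = random_project(Phi, 20, rng)  # shape (100, 20)
```

A linear model trained on `Z` then plays the role of the polynomial network's weight vector, at a fraction of the feature-space dimension.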

Similar articles

Enhancing Efficiency of Neural Network Model in Prediction of Firms Financial Crisis Using Input Space Dimension Reduction Techniques

The main focus of this study is on data pre-processing: reducing the number of inputs, or the size of the input space, with the aim of a justified generalization of the data set in smaller dimensions without losing the most significant data. When the input space is large, the most important input variables can be identified, from which insignificant variables are eliminated, or a variable ...


Parameter-free Network Sparsification and Data Reduction by Minimal Algorithmic Information Loss

The study of large and complex datasets, or big data, organized as networks has emerged as one of the central challenges in most areas of science and technology. Cellular and molecular networks in biology are one of the prime examples. Hence, a number of techniques for data dimensionality reduction, especially in the context of networks, have been developed. Yet, current techniques require ...


Training Process Reduction Based On Potential Weights Linear Analysis To Accelerate Back Propagation Network

Learning is the key property of the Back Propagation Network (BPN): finding suitable weights and thresholds during training in order to improve training time as well as achieve high accuracy. Currently, data pre-processing, such as dimension reduction of input values, and pre-training are the contributing factors in developing efficient techniques for reducing training time with high accura...


Training Mixture Models at Scale via Coresets

How can we train a statistical mixture model on a massive data set? In this paper, we show how to construct coresets for mixtures of Gaussians and natural generalizations. A coreset is a weighted subset of the data, which guarantees that models fitting the coreset also provide a good fit for the original data set. We show that, perhaps surprisingly, Gaussian mixtures admit coresets of size poly...


Automatic Capacity Tuning of Very Large VC-dimension Classifiers

Large VC-dimension classifiers can learn difficult tasks, but are usually impractical because they generalize well only if they are trained with huge quantities of data. In this paper we show that even very high-order polynomial classifiers can be trained with a small amount of training data and yet generalize better than classifiers with a smaller VC-dimension. This is achieved with a maximum margin al...



Journal:

Volume   Issue

Pages  -

Publication date: 2000